Asymptotic Prediction Error Variance for Feedforward Neural Networks

Authors
Abstract


Similar articles

Optimization-based learning with bounded error for feedforward neural networks

An optimization-based learning algorithm for feedforward neural networks is presented, in which the network weights are determined by minimizing a sliding-window cost. The algorithm is particularly well suited for batch learning and allows one to deal with large data sets in a computationally efficient way. An analysis of its convergence and robustness properties is made. Simulation results con...
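The abstract above describes determining the weights by minimizing a cost over a sliding window of recent samples. The following is a minimal sketch of that idea, assuming a linear stand-in for the network and a plain gradient step per window; the function name, data, and learning rate are illustrative, not the paper's actual algorithm.

```python
import numpy as np

def sliding_window_update(weights, window_X, window_y, lr=0.1):
    """One gradient step that reduces the squared error over the
    current window. A linear model stands in for the network here."""
    preds = window_X @ weights
    grad = window_X.T @ (preds - window_y) / len(window_y)
    return weights - lr * grad

# Usage: slide a fixed-length window over a (noiseless, synthetic) stream.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5])

w = np.zeros(3)
for _ in range(5):                         # a few passes over the stream
    for start in range(len(X) - 10):
        w = sliding_window_update(w, X[start:start + 10], y[start:start + 10])
```

Processing one small window at a time is what makes the batch-style approach tractable for large data sets: each update touches only a fixed-size slice of the data.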


Optimized Learning with Bounded Error for Feedforward Neural Networks

A learning algorithm for feedforward neural networks is presented that is based on a parameter estimation approach. The algorithm is particularly well-suited for batch learning and allows one to deal with large data sets in a computationally efficient way. An analysis of its convergence and robustness properties is made. Simulation results confirm the effectiveness of the algorithm and its adva...


Conjugate descent formulation of backpropagation error in feedforward neural networks

The feedforward neural network architecture uses backpropagation learning to determine optimal weights between different interconnected layers. This learning procedure uses a gradient descent technique applied to a sum-of-squares error function for the given input-output patterns. It employs an iterative procedure to minimise the error function for a given set of patterns, by adjusting the weight...
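The procedure described above, gradient descent on a sum-of-squares error with gradients obtained by backpropagation, can be sketched as follows for a one-hidden-layer network. The data, layer sizes, and learning rate are hypothetical; the paper's conjugate-descent formulation itself is truncated above and not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy input-output patterns (stand-ins for "the given set of patterns").
X = rng.normal(size=(64, 2))
t = np.sin(X[:, :1]) + 0.5 * X[:, 1:]

W1 = rng.normal(scale=0.5, size=(2, 8))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden -> output weights
lr = 0.05

def sse(W1, W2):
    """Sum-of-squares error E = 0.5 * sum (output - target)^2."""
    return 0.5 * np.sum((np.tanh(X @ W1) @ W2 - t) ** 2)

loss_before = sse(W1, W2)
for _ in range(500):
    h = np.tanh(X @ W1)                       # forward pass
    err = h @ W2 - t
    gW2 = h.T @ err                           # backprop: output-layer gradient
    gW1 = X.T @ ((err @ W2.T) * (1 - h**2))   # chain rule through tanh
    W2 -= lr * gW2 / len(X)                   # plain gradient descent step
    W1 -= lr * gW1 / len(X)
loss_after = sse(W1, W2)
```

Conjugate-descent variants replace the plain step above with search directions that combine the current gradient with previous ones, which typically speeds convergence.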


Time series prediction by feedforward neural networks—is it difficult?

The difficulties that a neural network faces when trying to learn from a quasiperiodic time series are studied analytically using a teacher–student scenario where the random input is divided into two macroscopic regions with different variances, 1 and 1/γ² (γ > 1). The generalization error is found to decrease as ε_g ∝ exp(−α/γ²), where α is the number of examples per input dimension. In contradi...


Feedforward Neural Networks

Here x is an input, y is a “label”, v ∈ R^d is a parameter vector, and f(x, y) ∈ R^d is a feature vector that corresponds to a representation of the pair (x, y). Log-linear models have the advantage that the feature vector f(x, y) can include essentially any features of the pair (x, y). However, these features are generally designed by hand, and in practice this is a limitation. It can be laborio...
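The log-linear model referred to above scores each pair (x, y) as v · f(x, y) and normalizes over the candidate labels, p(y|x) = exp(v · f(x, y)) / Σ_y′ exp(v · f(x, y′)). A minimal sketch, with hand-designed features on a toy two-class problem (the feature function and weights are purely illustrative):

```python
import math

def loglinear_prob(v, f, x, labels, y):
    """p(y|x) under a log-linear model with parameters v
    and feature function f(x, y)."""
    def score(label):
        return sum(vi * fi for vi, fi in zip(v, f(x, label)))
    z = sum(math.exp(score(yp)) for yp in labels)   # partition function
    return math.exp(score(y)) / z

# Hand-designed features for a hypothetical two-class example: one
# input-dependent feature and one bias feature, both active only for y = 1.
def f(x, y):
    return [x if y == 1 else 0.0, 1.0 if y == 1 else 0.0]

v = [2.0, -1.0]
p0 = loglinear_prob(v, f, 0.5, [0, 1], 0)
p1 = loglinear_prob(v, f, 0.5, [0, 1], 1)
```

Hand-crafting `f` is exactly the limitation the passage points to; feedforward networks instead learn the representation from data.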



Journal

Journal title: IFAC-PapersOnLine

Year: 2020

ISSN: 2405-8963

DOI: 10.1016/j.ifacol.2020.12.1310